
    CABALA: Collaborative Architectures based on Biometric Adaptable Layers and Activities

    The lack of communication and of dynamic adaptation to working settings often hinders the stable performance of the subsystems of present multibiometric architectures. The calibration phase often uses a specific training set, so that (sub)systems are tuned to well-determined conditions. In this work we investigate the modular construction of systems according to the CABALA (Collaborative Architectures based on Biometric Adaptable Layers and Activities) approach. Different levels of flexibility and collaboration are supported. The computation of system response reliability (SRR), for each single response of each single subsystem, makes it possible to address temporary decreases in accuracy due to adverse conditions (poor lighting, dirty sensors, etc.), by refusing a poorly reliable response or by asking for a new recognition operation. Subsystems can collaborate at a twofold level, both in returning a jointly determined answer and in co-evolving to adapt to changing conditions. At the first level, single-biometric subsystems implement the N-Cross Testing Protocol: they work in parallel, but exchange information to reach the final response. At a higher level of interdependency, the parameters of each subsystem can be dynamically optimized according to the behavior of its companions. To this aim, an additional Supervisor Module analyzes the single results and, in our present implementation, modifies the degree of reliability required from each subsystem to accept its future responses. The paper explores different combinations of these novel strategies. We demonstrate that as component collaboration increases, so do both the overall system accuracy and the ability to identify unstable subsystems. (C) 2011 Elsevier Ltd. All rights reserved
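The reliability-thresholding idea above, where the Supervisor adjusts how much reliability it demands from each subsystem based on its recent behavior, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the class names, the initial threshold, and the fixed adjustment step are all assumptions.

```python
def accept_response(srr: float, threshold: float) -> bool:
    """Refuse a poorly reliable response: only SRR >= threshold passes."""
    return srr >= threshold

class Supervisor:
    """Adapts the reliability required from each subsystem over time.

    Hypothetical rule: demand more reliability from a subsystem right
    after it errs, relax the demand when it answers correctly.
    """
    def __init__(self, initial_threshold: float = 0.5, step: float = 0.05):
        self.thresholds = {}
        self.initial = initial_threshold
        self.step = step

    def update(self, subsystem: str, response_correct: bool) -> float:
        t = self.thresholds.get(subsystem, self.initial)
        if response_correct:
            t = max(0.0, t - self.step)   # relax the requirement
        else:
            t = min(1.0, t + self.step)   # tighten the requirement
        self.thresholds[subsystem] = t
        return t

sup = Supervisor()
sup.update("face", response_correct=False)  # "face" must now be more reliable
sup.update("iris", response_correct=True)   # "iris" is trusted a bit more
```

A real deployment would of course derive the threshold update from the joint behavior of the companion subsystems rather than a fixed step.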

    FARO: FAce Recognition against Occlusions and Expression Variations

    Maria De Marsico, Michele Nappi, and Daniel Riccio. Abstract: Face recognition is widely considered one of the most promising biometric techniques, allowing high recognition rates without being too intrusive. Many approaches have been presented to solve this special pattern recognition problem, also addressing the challenging cases of face changes, mainly occurring in expression, illumination, or pose. On the other hand, less work can be found in the literature dealing with partial occlusions (e.g., sunglasses and scarves). This paper presents FAce Recognition against Occlusions and Expression Variations (FARO), a new method based on partitioned iterated function systems (PIFSs) which is quite robust with respect to expression changes and partial occlusions. In general, algorithms based on PIFSs compute a map of self-similarities inside the whole input image, searching for correspondences among small square regions. However, traditional algorithms of this kind suffer from local distortions such as occlusions. To overcome such a limitation, the information extracted by PIFSs is made local by working independently on each face component (eyes, nose, and mouth). Distortions introduced by likely occlusions or expression changes are further reduced by means of an ad hoc distance measure. To experimentally confirm the robustness of the proposed method to lighting and expression variations, as well as to occlusions, FARO has been tested on the AR-Faces database, one of the main benchmarks for the scientific community in this context. A further validation of FARO's performance is provided by the experimental results produced on the Face Recognition Grand Challenge database
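The component-wise matching idea above can be illustrated with a toy sketch: compute a distance per face component, then combine them so that one badly matching (likely occluded) component cannot dominate the global score. The trimmed combination below stands in for the paper's ad hoc distance measure; the feature vectors and trimming rule are assumptions for illustration only.

```python
def component_distance(a: list, b: list) -> float:
    """Toy per-component distance: mean absolute difference of features."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def faro_like_distance(probe: dict, gallery: dict, drop_worst: int = 1) -> float:
    """Match eyes/nose/mouth independently, then drop the worst-matching
    components so a local occlusion cannot dominate the global score."""
    dists = sorted(component_distance(probe[c], gallery[c]) for c in probe)
    kept = dists[:len(dists) - drop_worst] if drop_worst else dists
    return sum(kept) / len(kept)

# A scarf occludes the mouth: its component distance is huge, but it is
# discarded before averaging, so the match still succeeds.
probe = {"eyes": [1.0, 2.0], "nose": [0.5], "mouth": [9.0, 9.0]}
gallery = {"eyes": [1.1, 2.1], "nose": [0.6], "mouth": [0.0, 0.0]}
score = faro_like_distance(probe, gallery)
```

In FARO itself, the per-component representation comes from PIFS self-similarity maps rather than raw feature vectors; the sketch only shows why localizing the computation helps against occlusions.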

    HP2IFS: Head Pose estimation exploiting Partitioned Iterated Function Systems

    Estimating the actual head orientation from 2D images, with regard to its three degrees of freedom, is a well-known problem that is highly significant for a large number of applications involving head pose knowledge. Consequently, this topic has been tackled by a plethora of methods and algorithms, most of which exploit neural networks. Machine learning methods indeed achieve accurate head rotation values, yet they require an adequate training stage and, to that aim, a relevant number of positive and negative examples. In this paper we take a different approach to this topic by using fractal coding theory, and in particular Partitioned Iterated Function Systems, to extract the fractal code from the input head image and to compare this representation to the fractal code of a reference model through the Hamming distance. According to experiments conducted on both the BIWI and the AFLW2000 databases, the proposed PIFS-based head pose estimation method provides accurate yaw/pitch/roll angular values, with a performance approaching that of state-of-the-art machine-learning-based algorithms and exceeding most non-training-based approaches
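The comparison step described above, matching a probe's fractal code against reference models by Hamming distance, reduces to a few lines once the codes are serialized. The sketch below assumes codes are already available as equal-length bit strings (the PIFS extraction itself is outside its scope), and the reference labels are invented for illustration.

```python
def hamming(code_a: str, code_b: str) -> int:
    """Number of differing bits between two equal-length fractal codes."""
    if len(code_a) != len(code_b):
        raise ValueError("fractal codes must have equal length")
    return sum(a != b for a, b in zip(code_a, code_b))

def nearest_pose(probe_code: str, reference_models: dict) -> str:
    """Pick the reference pose label whose fractal code is closest
    to the probe's code in Hamming distance."""
    return min(reference_models,
               key=lambda label: hamming(probe_code, reference_models[label]))

# Toy 4-bit codes standing in for real PIFS codes of reference head poses.
refs = {"frontal": "0000", "left_30": "0110", "right_30": "1011"}
pose = nearest_pose("0111", refs)
```

Real fractal codes are far longer and encode range-domain mappings plus transform parameters, but the nearest-reference decision has exactly this shape.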

    Face Authentication using Speed Fractal Technique

    In this paper, a new fractal-based recognition method, Face Authentication using Speed Fractal Technique (FAST), is presented. Its main contribution is a good compromise between memory requirements, execution time, and recognition rate. FAST is based on Iterated Function Systems (IFS) theory, largely studied in still image compression and indexing, but not yet widely used for face recognition. Indeed, fractals are well known to be invariant to a large set of global transformations. FAST is robust with respect to meaningful variations in facial expression and to small changes in illumination and pose. Another advantage of the FAST strategy is the speed-up it introduces: the typical slowness of fractal image compression is avoided by exploiting only the indexing phase, which requires time O(D log D), where D is the size of the domain pool. Lastly, the FAST algorithm compares well to a large set of other recognition methods, as underlined in the experimental results
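The O(D log D) indexing phase mentioned above can be sketched as follows: sort the domain pool once by a cheap block feature, then locate candidate matches by binary search instead of performing the full, slow fractal encoding. The choice of block mean as the feature is an assumption for illustration, not FAST's actual index key.

```python
import bisect

def block_mean(block):
    """Cheap scalar feature of an image block (here: mean intensity)."""
    return sum(block) / len(block)

def build_index(domain_pool):
    """O(D log D): sort domain blocks by their feature value."""
    return sorted((block_mean(b), i) for i, b in enumerate(domain_pool))

def best_candidate(index, range_block):
    """Binary-search the domain block whose feature is nearest the probe's,
    avoiding an exhaustive O(D) scan per range block."""
    target = block_mean(range_block)
    pos = bisect.bisect_left(index, (target,))
    neighbors = index[max(0, pos - 1):pos + 1]
    return min(neighbors, key=lambda t: abs(t[0] - target))[1]

# Toy 2-pixel "blocks" standing in for real image domain blocks.
pool = [[10, 12], [50, 52], [90, 92]]
idx = build_index(pool)
candidate = best_candidate(idx, [48, 50])  # nearest domain block by mean
```

Sorting once and searching in O(log D) per query is what keeps the overall indexing cost at O(D log D) rather than the quadratic cost of exhaustive range-domain matching.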

    Entropy Based Template Analysis in Face Biometric Identification Systems

    The accuracy of a biometric matching algorithm relies on its ability to separate the score distributions of genuine and impostor subjects. However, capture conditions (e.g., illumination or acquisition devices), as well as factors related to the subject at hand (e.g., pose or occlusions), may lead even a generally accurate algorithm to provide incorrect answers. Techniques for face classification are still too sensitive to image distortion, and this limitation hinders their use in large-scale commercial applications, which typically run in uncontrolled settings. This paper joins the notion of quality with the further interesting concept of representativeness of a biometric sample, taking into account the case of more samples per subject. Though of excellent quality, the gallery samples belonging to a certain subject might be too similar to one another, so that even a moderately different input sample of the same subject will cause an error. This seems to indicate that quality measures alone cannot guarantee good performance. In practice, a subject's gallery should include a sufficient amount of possible variations, in order to allow correct recognition in different situations. We call this gallery feature representativeness. A significant feature to consider together with quality is therefore the sufficient representativeness of each subject's gallery. A strategy to address this problem is to investigate the role of entropy, computed over a set of samples of the same subject. The paper presents a number of applications of such a measure in handling the galleries of the different users registered in a system. The resulting criteria might also guide template updating, to assure gallery representativeness over time
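One plausible way to turn the entropy idea above into a representativeness score is sketched below: bin the pairwise distances between a subject's gallery samples and take the Shannon entropy of the histogram. Near-duplicate galleries concentrate their distances in one bin (low entropy), while varied galleries spread them out (higher entropy). The distance function, binning, and parameters are illustrative assumptions, not the paper's exact measure.

```python
import math
from itertools import combinations

def distance(a, b):
    """Toy feature-vector distance (L1)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def gallery_entropy(samples, bins=4, max_dist=10.0):
    """Shannon entropy of the binned pairwise-distance histogram.

    Low entropy flags a gallery whose samples are too similar to one
    another, i.e. a gallery lacking representativeness.
    """
    dists = [distance(a, b) for a, b in combinations(samples, 2)]
    hist = [0] * bins
    for d in dists:
        hist[min(bins - 1, int(d / max_dist * bins))] += 1
    total = sum(hist)
    return -sum((h / total) * math.log2(h / total) for h in hist if h)

duplicates = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]]   # near-identical samples
varied = [[0.0, 0.0], [3.0, 1.0], [7.0, 2.0]]       # more spread-out samples
low, high = gallery_entropy(duplicates), gallery_entropy(varied)
```

A template-updating policy could then, for instance, admit a new sample into the gallery only when it raises (or maintains) this entropy score.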

    Web-Shaped Model for Head Pose Estimation: an Approach for Best Exemplar Selection

    Head pose estimation is a sensitive topic in video surveillance/smart ambient scenarios, since head rotations can hide or distort discriminative features of the face. Face recognition must often deal with video frames in which subjects appear in poses that make recognition quite impossible. In this respect, selecting the frames with the best face orientation allows triggering recognition only on these, therefore decreasing the possibility of errors. This paper proposes a novel approach to head pose estimation for smart cities and video surveillance scenarios, aiming at this goal. The method relies on a cascade of two models: the first predicts the positions of 68 well-known face landmarks; the second applies a web-shaped model over the detected landmarks, to associate each of them with a specific face sector. The method can work on faces detected at a reasonable distance and at a resolution supported by many present-day devices. Results of experiments executed over classical pose estimation benchmarks, namely the Pointing '04, Biwi, and AFLW datasets, show good performance in terms of both pose estimation and computing time. Further results refer to noisy images that are typical of the addressed settings. Finally, examples demonstrate the selection of the best frames from videos captured in video surveillance conditions
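The landmark-to-sector association above can be pictured as a polar "web" laid over the face: each detected landmark falls into a sector defined by its angle (spoke) and radius (ring) from a center point. The sketch below is only an illustration of that geometric idea; the number of spokes and rings, the ring spacing, and the centering are assumptions, not the paper's configuration.

```python
import math

def sector_of(landmark, center, num_spokes=8, ring_radius=50.0):
    """Map one (x, y) landmark to a (spoke, ring) sector of the web."""
    dx, dy = landmark[0] - center[0], landmark[1] - center[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)
    spoke = int(angle / (2 * math.pi) * num_spokes)
    ring = int(math.hypot(dx, dy) // ring_radius)
    return spoke, ring

def web_descriptor(landmarks, center, num_spokes=8, num_rings=3,
                   ring_radius=50.0):
    """Histogram of landmarks over web sectors. Head rotation shifts
    landmark mass between sectors, which a downstream pose classifier
    can exploit."""
    hist = [[0] * num_rings for _ in range(num_spokes)]
    for lm in landmarks:
        spoke, ring = sector_of(lm, center, num_spokes, ring_radius)
        hist[spoke][min(ring, num_rings - 1)] += 1
    return hist

center = (100.0, 100.0)
landmarks = [(130.0, 100.0), (100.0, 160.0)]  # right of / below the center
desc = web_descriptor(landmarks, center)
```

With 68 landmarks, the resulting sector histogram is a compact, resolution-tolerant descriptor, which fits the paper's claim of working at a distance and on modest-resolution devices.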